GhostNetV2: Enhance Cheap Operation with Long-Range Attention (Supplementary Material) Yehui Tang

Neural Information Processing Systems

GhostNetV2 is compatible with different detection models as a universal backbone. Table A1 reports object detection results on the MS COCO dataset, using the networks of [5, 4] as detection heads. Figure A1 compares the computation of decoupled attention with that of full attention. The visualization of the attention maps is shown in Figure 6 of the main paper.


GhostNetV2: Enhance Cheap Operation with Long-Range Attention

Neural Information Processing Systems

Lightweight convolutional neural networks (CNNs) are specially designed for applications on mobile devices that demand fast inference. A convolutional operation captures only local information within a window region, which prevents performance from being improved further. Introducing self-attention into convolutions can capture global information well, but it severely slows down actual inference. In this paper, we propose a hardware-friendly attention mechanism (dubbed DFC attention) and then present a new GhostNetV2 architecture for mobile applications. The proposed DFC attention is built on fully-connected layers, which not only execute quickly on common hardware but also capture dependences between long-range pixels.
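The idea of decoupling attention along the two spatial axes can be sketched roughly as follows. This is only an illustrative toy, not the authors' implementation: the dense per-axis weight matrices `w_h`/`w_w` and the sigmoid gating are simplifying assumptions standing in for the paper's fully-connected layers, and the downsampling used in the real architecture is omitted.

```python
import numpy as np


def dfc_attention(x, w_h, w_w):
    """Toy decoupled fully-connected attention.

    x:   feature map of shape (C, H, W)
    w_h: (H, H) matrix mixing information along the height axis
    w_w: (W, W) matrix mixing information along the width axis

    Applying the two axis-wise mixes in sequence lets every output
    position aggregate information from its whole row and column,
    at O(H*W*(H+W)) cost instead of the O((H*W)^2) of full attention.
    """
    # vertical FC: each output row is a weighted sum over all rows
    a = np.einsum('hk,ckw->chw', w_h, x)
    # horizontal FC: each output column is a weighted sum over all columns
    a = np.einsum('wk,chk->chw', w_w, a)
    # sigmoid gate re-weights the original features
    return x * (1.0 / (1.0 + np.exp(-a)))
```

With identity mixing matrices the attention map degenerates to `sigmoid(x)`, so the output is simply the gated input; learned matrices would instead spread each pixel's influence across its entire row and column.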